Optimization Approaches for Learning with Low-rank Regularization
Authors
Abstract
Low-rank modeling has many important applications in machine learning, computer vision, and social network analysis. As direct rank minimization is NP-hard, many alternative formulations have been proposed. In this survey, we first introduce optimization approaches for two popular methods of rank minimization, namely nuclear norm regularization and the rank constraint. The nuclear norm is the tightest convex envelope of the rank function, so low-rank optimization with nuclear norm regularization is a convex problem to which many convex optimization approaches can be applied. When a rank constraint is used instead, the resulting optimization problems become simpler but are generally non-convex. Thus, algorithms for these problems lack global optimality guarantees and may suffer from slow convergence. Beyond these two common approaches, adaptive non-convex regularizers have recently been proposed, which can better fit the singular values. The key idea behind these regularizers is that large singular values are more informative and thus should be penalized less. The resulting optimization problems are neither smooth nor convex, and are therefore harder than those with nuclear norm regularization or a rank constraint. Several algorithms developed recently for these problems are introduced in this survey. Helpful remarks are given for algorithms working with the same type of regularizer, and experiments are performed on both synthetic and real data sets to compare the above three types of regularizers. Finally, we discuss some possible research issues.
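To make the nuclear norm route concrete, below is a minimal NumPy sketch of proximal gradient descent for matrix completion, where the proximal step is singular value thresholding (the proximal operator of the nuclear norm). The helper name `svt`, the toy data, and all parameter values are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Proximal gradient for   min_X 0.5 * ||P_Omega(X - M)||_F^2 + lam * ||X||_*
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))  # rank-5 ground truth
mask = rng.random(M.shape) < 0.4                                 # observed entries
X, lam, step = np.zeros_like(M), 1.0, 1.0                        # step = 1/Lipschitz
for _ in range(200):
    grad = mask * (X - M)            # gradient of the masked quadratic loss
    X = svt(X - step * grad, step * lam)
```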
Similar Resources
Online Learning in the Embedded Manifold of Low-rank Matrices
When learning models that are represented in matrix forms, enforcing a low-rank constraint can dramatically improve the memory and run time complexity, while providing a natural regularization of the model. However, naive approaches to minimizing functions over the set of low-rank matrices are either prohibitively time consuming (repeated singular value decomposition of the matrix) or numerical...
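As an illustration of why the factored route avoids repeated SVDs, here is a generic sketch that keeps the iterate in the form L @ R.T and takes gradient steps on the factors, so the rank-k structure is preserved without ever forming the full matrix. This is an assumed toy setup, not the manifold retraction developed in the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 100, 80, 4
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # target matrix
L = rng.standard_normal((m, k)) * 0.1                          # small init
R = rng.standard_normal((n, k)) * 0.1
lr = 0.01
for _ in range(500):
    E = L @ R.T - M                  # residual; gradient of 0.5*||L R^T - M||_F^2
    L, R = L - lr * (E @ R), R - lr * (E.T @ L)   # simultaneous factor updates
```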
Reexamining Low Rank Matrix Factorization for Trace Norm Regularization
Trace norm regularization is a widely used approach for learning low rank matrices. A standard optimization strategy is based on formulating the problem as one of low rank matrix factorization which, however, leads to a non-convex problem. In practice this approach works well, and it is often computationally faster than standard convex solvers such as proximal gradient methods. Nevertheless, it...
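The factorization strategy rests on the variational identity ||X||_* = min over X = U V^T of (||U||_F^2 + ||V||_F^2)/2, which turns the trace norm into ridge penalties on the factors. A hedged NumPy sketch of the resulting non-convex objective follows; the data, rank over-estimate, and step size are invented for illustration.

```python
import numpy as np

#   min_{U,V} 0.5*||U V^T - M||_F^2 + (lam/2)*(||U||_F^2 + ||V||_F^2)
rng = np.random.default_rng(2)
M = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 50))  # rank-3 target
k, lam, lr = 10, 0.5, 0.005          # k deliberately over-estimates the rank
U = rng.standard_normal((60, k)) * 0.1
V = rng.standard_normal((50, k)) * 0.1
for _ in range(1000):
    E = U @ V.T - M
    gU = E @ V + lam * U             # gradient w.r.t. U
    gV = E.T @ U + lam * V           # gradient w.r.t. V
    U, V = U - lr * gU, V - lr * gV
```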
Online Learning in The Manifold of Low-Rank Matrices
When learning models that are represented in matrix forms, enforcing a low-rank constraint can dramatically improve the memory and run time complexity, while providing a natural regularization of the model. However, naive approaches for minimizing functions over the set of low-rank matrices are either prohibitively time consuming (repeated singular value decomposition of the matrix) or numerica...
A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices
We propose a general and efficient algorithm for learning low-rank matrices. The proposed algorithm converges super-linearly and can keep the matrix to be learned in a compact factorized representation without the need to specify the rank beforehand. Moreover, we show that the framework can easily be generalized to the problem of learning multiple matrices and general spectral regularization...
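For flavor, here is a plain ADMM (scaled augmented Lagrangian) sketch for a nuclear-norm-regularized denoising problem with the split X = Z. It is a generic baseline under assumed toy data, not the super-linearly convergent factored method proposed in that paper.

```python
import numpy as np

def svt(X, tau):
    """Soft-threshold the singular values of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# ADMM on  min_X 0.5*||X - M||_F^2 + lam*||X||_*  with split X = Z,
# scaled dual variable W and penalty parameter rho.
rng = np.random.default_rng(3)
M = rng.standard_normal((40, 4)) @ rng.standard_normal((4, 30))
lam, rho = 1.0, 1.0
X = Z = W = np.zeros_like(M)
for _ in range(100):
    X = (M + rho * (Z - W)) / (1.0 + rho)   # prox of the quadratic loss
    Z = svt(X + W, lam / rho)               # prox of the nuclear norm
    W = W + X - Z                           # scaled dual ascent
```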
Algorithms for Matrix Completion by Yu Xin
We consider collaborative filtering methods for matrix completion. A typical approach is to find a low rank matrix that matches the observed ratings. However, the corresponding problem has local optima. In this thesis, we study two approaches to remedy this issue: reference vector method and trace norm regularization. The reference vector method explicitly constructs user and item features base...
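A standard baseline in this collaborative-filtering setting is alternating least squares for rank-k matrix completion with a ridge penalty, sketched below on assumed synthetic ratings; the thesis's reference-vector construction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k, lam = 30, 25, 3, 0.1
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # true ratings
mask = rng.random((m, n)) < 0.5                                # observed entries
U = rng.standard_normal((m, k))
V = rng.standard_normal((n, k))
I = lam * np.eye(k)
for _ in range(20):
    for i in range(m):                    # ridge regression for each user row
        Vi = V[mask[i]]                   # factors of items this user rated
        U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ M[i, mask[i]])
    for j in range(n):                    # ridge regression for each item row
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ M[mask[:, j], j])
```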